Review for NeurIPS paper: The Origins and Prevalence of Texture Bias in Convolutional Neural Networks
The paper is very well written, clearly involves a massive amount of work, and answers most of its questions with evidence, offering conjectures for the questions that cannot yet be answered to guide future research. Despite the high quality, I noticed several drawbacks and suggest the authors address them. In the abstract, the paper says the differences "arise not from differences in their internal workings, but from differences in the data that they see", which seems to suggest that whether a model learns texture or shape depends primarily on the data it sees. Yet in the experiments, the authors demonstrate that, with more carefully designed regularization (termed "self-supervised losses" in the paper), the model can be pushed to focus more on shape. This empirical observation seems to contradict the main claim in the abstract, since I would consider losses to be part of the "internal workings" (or what does "internal workings" mean exactly?). I suggest the authors revise the corresponding text to reflect this more accurately.
Artificial intelligence sheds light on how the brain processes language: Neuroscientists find the internal workings of next-word prediction models resemble those of language-processing centers in the brain
The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion. Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain. Computer models that perform well on other types of language tasks do not show this similarity to the human brain, offering evidence that the human brain may use next-word prediction to drive language processing.
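The next-word-prediction objective the article describes can be illustrated, at a toy scale far removed from the neural models in the MIT study, by a bigram counter that predicts the most frequent continuation of a word. The corpus and function names below are illustrative choices of mine, not anything from the study:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent continuation seen in training, or None if unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# A tiny toy corpus: "the" is followed by "cat" more often than by "mat".
model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # -> "cat"
```

Real predictive language models replace these raw counts with learned representations over long contexts, which is what lets them go beyond frequency lookup to the apparent understanding the article describes.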
Probability VS Likelihood
The biggest problem that keeps people from understanding the concepts of Data Science is the wrong approach to learning it. In my view this popular approach is the worst possible one, yet the sad reality is that most people follow it. A hype has been created in the minds of the majority: that in most places, situations, and scenarios Deep Learning helps; that in a few situations Machine Learning helps; and that in very few situations Statistics helps. This hype is completely wrong, and the reality is exactly the opposite. In reality, Deep Learning is a subset of Machine Learning, and Machine Learning itself is built entirely on Statistics.
Google's What-If Tool And The Future Of Explainable AI
Art exhibition "Waterfall of Meaning" by Google PAIR displayed at the Barbican Curve Gallery. The rise of deep learning has been defined by a shift away from transparent and understandable human-written code towards sealed black boxes whose creators have little understanding of how or even why they yield the results they do. Concerns over bias, brittleness and flawed representations have led to growing interest in the area of "explainable AI" in which frameworks help interrogate a model's internal workings to shed light on precisely what it has learned about the world and help its developers nudge it towards a fairer and more faithful internal representation. As companies like Google roll out a growing stable of explainable AI tools like its What-If Tool, perhaps a more transparent and understandable deep learning future can help address the limitations that have slowed the field's deployment. Since the dawn of the computing revolution, the underlying programming that guided those mechanical thinking machines was provided by humans through transparent and visible instruction sets.
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.84)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.84)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.64)
Build Deeper: What's in the Book
So, let's see what I've covered in the book. The new book, Build Deeper: The Path to Deep Learning, is the successor to my earlier book, Build Deeper: Deep Learning Beginners' Guide (which is why I call this the 'second edition'), and I've added a lot more topics this time. The new book is more than twice the length of the old one, and covers Deep Learning in greater breadth and depth. Here's what you can expect in the book: a detailed explanation of what Deep Learning is, what it isn't, and how it relates to other areas of AI; and what Deep Learning has achieved through the years, including recent achievements from OpenAI and DeepMind.
AI Creates Jobs. Want Proof? Here's Haptik's Story. - Haptik Blog
The growth of Artificial Intelligence has spawned a thousand foes and a thousand friends. AI has drawn a line in the sand when it comes to the future of jobs, and it seems like most of the world has taken a stand either for or against it. So what does Haptik have to say, being a company at the forefront of the paradigm shift to Artificial Intelligence? Will people from non-technical backgrounds have job opportunities when Artificial Intelligence (AI) takes over? Can that claim actually be justified?
Is your software racist?
Late last year, a St. Louis tech executive named Emre Şarbak noticed something strange about Google Translate. He was translating phrases from Turkish -- a language that uses a single gender-neutral pronoun, "o," instead of "he" or "she." But when he asked Google's tool to turn the sentences into English, they seemed to read like a children's book from the 1950s. The ungendered Turkish sentence "o is a nurse" would become "she is a nurse," while "o is a doctor" would become "he is a doctor." The website Quartz went on to compose a sort of poem highlighting some of these phrases; Google's translation program decided that soldiers, doctors and entrepreneurs were men, while teachers and nurses were women.
- North America > United States > New York (0.05)
- North America > United States > Utah (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- (2 more...)
- Media (1.00)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
A tour of random forests
Random forests are an excellent "out of the box" tool for machine learning with many of the same advantages that have made neural nets so popular. They are able to capture non-linear and non-monotonic functions, are invariant to the scale of input data, are robust to missing values, and do "automatic" feature extraction. Additionally, they have other benefits that neural nets do not. What follows is a look into how random forests work, how they may be usefully applied, and a discussion of some situations in which they may be preferable to neural networks. So how do random forests work?
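The core mechanics the paragraph alludes to — training many trees on bootstrap resamples and aggregating their votes — can be sketched in plain Python. This is a deliberately minimal toy (depth-1 "stump" trees, no per-split feature subsampling, invented function names), not the full algorithm a library like scikit-learn implements:

```python
import random
from collections import Counter

def best_stump(X, y):
    """Exhaustively find the (feature, threshold) split whose two sides are
    purest, i.e. that maximizes correctly-labeled points under per-side
    majority labels."""
    best, best_score = None, -1
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            if not left or not right:
                continue  # split must put data on both sides
            score = max(Counter(left).values()) + max(Counter(right).values())
            if score > best_score:
                best_score = score
                best = (f, t,
                        Counter(left).most_common(1)[0][0],   # majority label, left
                        Counter(right).most_common(1)[0][0])  # majority label, right
    return best

def fit_forest(X, y, n_trees=25, seed=0):
    """Train n_trees stumps, each on a bootstrap resample of the data."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # with replacement
        stump = best_stump([X[i] for i in idx], [y[i] for i in idx])
        if stump is not None:  # skip degenerate bootstraps with no valid split
            forest.append(stump)
    return forest

def predict(forest, row):
    """Aggregate the trees by majority vote."""
    votes = [l if row[f] <= t else r for f, t, l, r in forest]
    return Counter(votes).most_common(1)[0][0]

# Toy data: the label is 1 exactly when the first feature exceeds 0.5.
X = [[0.1, 0.9], [0.2, 0.3], [0.3, 0.7], [0.6, 0.2], [0.8, 0.5], [0.9, 0.9]]
y = [0, 0, 0, 1, 1, 1]
forest = fit_forest(X, y)
```

Note that thresholding on raw feature values is what makes the method invariant to input scale, and voting over bootstrap-trained trees is what smooths out individual trees' mistakes. Real implementations grow deep trees and also randomize the features considered at each split.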